Cooperative multi-agent reinforcement learning (cMARL) has many real-world applications, but the policies trained by existing cMARL algorithms are not robust enough when deployed. Many methods for adversarial attacks on RL systems also exist, which implies that RL systems can suffer from adversarial attacks, but most of them focus on single-agent RL. In this paper, we propose a \textit{sparse adversarial attack} on cMARL systems. We train the attack policy with (MA)RL together with a regularization term. Our experiments show that the policies trained by current cMARL algorithms can perform poorly when only one or a few agents in the team (e.g., 1 of 25 or 1 of 5) are attacked at a few timesteps (e.g., 3 or 5 out of a total of 40 timesteps).
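As a rough illustration of the sparsity idea (not the paper's exact objective), a REINFORCE-style attacker loss can be regularized by the fraction of (agent, timestep) pairs that are attacked; the shapes, names, and coefficient below are assumptions for a toy PyTorch sketch:

    import torch

    def sparse_attack_loss(log_probs, gates, victim_returns, sparsity_coef=0.1):
        # log_probs: (T, N) log-probabilities of the sampled attack decisions.
        # gates: (T, N) 0/1 indicators of whether agent n is perturbed at step t.
        # victim_returns: (T,) returns of the victim team (the attacker wants them low).
        pg_loss = (log_probs.sum(dim=1) * victim_returns).mean()  # score-function surrogate
        sparsity = gates.float().mean()  # fraction of attacked (agent, timestep) pairs
        return pg_loss + sparsity_coef * sparsity

    # Toy usage with random tensors (shapes only; no real environment rollout).
    T, N = 40, 5
    log_probs = torch.randn(T, N, requires_grad=True)
    loss = sparse_attack_loss(log_probs, torch.randint(0, 2, (T, N)), torch.randn(T))
    loss.backward()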
We present MVLayoutNet, an end-to-end network for holistic 3D reconstruction from multi-view panoramas. Our core contribution is to seamlessly combine learned monocular layout estimation and multi-view stereo (MVS) for accurate reconstruction in both 3D and image space. We jointly train a layout module to produce an initial layout and a novel MVS module to obtain accurate layout geometry. Unlike standard MVSNet [33], our MVS module adopts a newly built layout cost volume, which aggregates the multi-view costs at the same depth layer into the corresponding layout elements. We also provide an attention-based scheme that guides the MVS module to focus on structural regions. This design accounts for both local pixel-level costs and global holistic information for better reconstruction. Experiments show that our method outperforms the state of the art in terms of depth RMSE by 21.7% and 20.6% on the 2D-3D-S [1] and ZInD [5] datasets, respectively. Finally, our method leads to coherent layout geometry that enables the reconstruction of an entire scene.
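For intuition, the layout cost volume can be caricatured as pooling per-pixel multi-view costs into one cost curve per layout element at every depth hypothesis; the mean pooling and shapes below are assumptions for illustration, not the learned module itself:

    import numpy as np

    def layout_cost_volume(pixel_costs, element_ids, num_elements):
        # pixel_costs: (D, H, W) multi-view matching costs per depth hypothesis.
        # element_ids: (H, W) integer id of the layout element each pixel belongs to.
        D = pixel_costs.shape[0]
        flat_ids = element_ids.reshape(-1)
        flat_costs = pixel_costs.reshape(D, -1)
        agg = np.zeros((D, num_elements))
        for e in range(num_elements):
            mask = flat_ids == e
            if mask.any():
                agg[:, e] = flat_costs[:, mask].mean(axis=1)  # average cost per element
        return agg  # (D, num_elements): one cost curve per layout element

    costs = np.random.rand(64, 128, 256)          # 64 depth hypotheses
    labels = np.random.randint(0, 8, (128, 256))  # 8 layout elements
    print(layout_cost_volume(costs, labels, 8).shape)  # (64, 8)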
Given a collection of RDF triples, the RDF-to-text generation task aims to generate a text description. Most previous methods solve this task using a sequence-to-sequence model or a graph-based model to encode the RDF triples and generate the text sequence. However, these methods fail to explicitly model the local and global structural information among the RDF triples. Moreover, previous methods also suffer from the non-negligible problem of low faithfulness of the generated text, which seriously affects the overall performance of these models. To solve these problems, we propose a model that combines two new graph-augmented structural neural encoders to jointly learn local and global structural information in the input RDF triples. To further improve text faithfulness, we innovatively introduce a reinforcement learning (RL) reward based on information extraction (IE). We first use a pretrained IE model to extract triples from the generated text and regard the number of correctly extracted triples as the additional RL reward. Experimental results on two benchmark datasets demonstrate that our proposed model outperforms the state-of-the-art baselines, and the additional reinforcement learning reward indeed helps to improve the faithfulness of the generated text.
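The IE-based reward can be illustrated with a minimal, hypothetical sketch: run an IE model on the generated text and count how many extracted triples match the input. Here `extract_triples` is a stand-in callable, not a real pretrained model:

    def ie_reward(generated_text, gold_triples, extract_triples):
        # Count how many triples recovered from the text appear in the input set.
        predicted = set(extract_triples(generated_text))
        return len(predicted & set(gold_triples))

    # Toy usage with a trivial "extractor" that returns one hard-coded triple.
    gold = [("Alan_Turing", "birthPlace", "London")]
    reward = ie_reward("Alan Turing was born in London.", gold,
                       lambda text: [("Alan_Turing", "birthPlace", "London")])
    print(reward)  # 1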
Knowledge graph question answering based on information retrieval (i.e., KGQA) aims to answer a question by retrieving answers from a large-scale knowledge graph. Most existing methods first roughly retrieve a knowledge sub-graph (KSG) that may contain candidate answers, and then search for the exact answers within this sub-graph. However, the coarsely retrieved KSG may contain thousands of candidate nodes, since the knowledge graph involved in the query is usually of large scale. To solve this problem, we first propose to partition the retrieved KSG into several smaller sub-KSGs via a new sub-graph partition algorithm, and then present a graph-augmented learning-to-rank model to select the top-ranked sub-KSGs from them. Our proposed model combines a novel sub-graph matching network to capture global interactions between the question and the sub-graphs, and an enhanced bilateral multi-perspective matching model to capture local interactions. Finally, we apply an answer selection model on the full KSG and on the top-ranked sub-KSGs, respectively, to validate the effect of our proposed graph-augmented learning-to-rank method. Experimental results on multiple benchmark datasets demonstrate the effectiveness of our approach.
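A heavily simplified sketch of the partition-then-rank pipeline follows; the connected-component split and the node-count scorer are placeholders for the paper's partition algorithm and matching networks:

    import networkx as nx

    def partition_and_rank(ksg, topic_entity, score_fn, top_k=2):
        g = ksg.copy()
        g.remove_node(topic_entity)  # detach the hub node so the KSG falls apart
        parts = [ksg.subgraph(c | {topic_entity}) for c in nx.connected_components(g)]
        return sorted(parts, key=score_fn, reverse=True)[:top_k]  # top-ranked sub-KSGs

    ksg = nx.Graph([("Q", "a"), ("Q", "b"), ("a", "c"), ("b", "d")])
    ranked = partition_and_rank(ksg, "Q", score_fn=lambda sg: sg.number_of_nodes())
    print([sorted(sg.nodes) for sg in ranked])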
Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress in terms of generalization by synthetic forgery data augmentation. In this work, we explore another path for improving the generalization. Our goal is to reduce the features that are easy to learn in the training phase, so as to reduce the risk of overfitting on specific forgery types. Specifically, in our method, a teacher network takes as input the face images and generates an attention map of the deep features by a diverse multihead attention ViT. The attention map is used to guide a student network to focus on the low-attended features by reducing the highly-attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method is able to achieve promising performances on unseen forgeries and highly compressed data.
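The two ingredients described above, suppressing highly-attended deep features and mixing features to synthesize forgeries in the feature domain, can be sketched roughly as follows (assumed shapes and thresholds; not the authors' implementation):

    import torch

    def suppress_high_attention(features, attention, keep_ratio=0.5):
        # features: (B, C, H, W); attention: (B, 1, H, W) with values in [0, 1].
        thresh = torch.quantile(attention.flatten(1), 1 - keep_ratio, dim=1)
        mask = (attention <= thresh.view(-1, 1, 1, 1)).float()
        return features * mask  # keep only the low-attended spatial locations

    def feature_mixup(feat_a, feat_b, alpha=0.5):
        lam = torch.distributions.Beta(alpha, alpha).sample()
        return lam * feat_a + (1 - lam) * feat_b  # feature-domain forgery synthesis

    feats, attn = torch.randn(2, 8, 4, 4), torch.rand(2, 1, 4, 4)
    print(suppress_high_attention(feats, attn).shape, feature_mixup(feats, feats.flip(0)).shape)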
In this work, we investigate improving the generalizability of GAN-generated image detectors by performing data augmentation in the fingerprint domain. Specifically, we first separate the fingerprints and contents of the GAN-generated images using an autoencoder based GAN fingerprint extractor, followed by random perturbations of the fingerprints. Then the original fingerprints are substituted with the perturbed fingerprints and added to the original contents, to produce images that are visually invariant but with distinct fingerprints. The perturbed images can successfully imitate images generated by different GANs to improve the generalization of the detectors, which is demonstrated by the spectra visualization. To our knowledge, we are the first to conduct data augmentation in the fingerprint domain. Our work explores a novel prospect that is distinct from previous works on spatial and frequency domain augmentation. Extensive cross-GAN experiments demonstrate the effectiveness of our method compared to the state-of-the-art methods in detecting fake images generated by unknown GANs.
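The fingerprint-domain data flow can be sketched as below; the Gaussian blur is only a crude stand-in for the autoencoder-based fingerprint extractor, and the random noise is one of many possible perturbations:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def augment_in_fingerprint_domain(image, sigma=2.0, noise_std=0.01, rng=None):
        rng = rng or np.random.default_rng(0)
        content = gaussian_filter(image, sigma=sigma)  # stand-in for the AE reconstruction
        fingerprint = image - content                  # high-frequency residual
        perturbed = fingerprint + rng.normal(0, noise_std, fingerprint.shape)
        return content + perturbed                     # visually similar, new fingerprint

    img = np.random.rand(64, 64)
    aug = augment_in_fingerprint_domain(img)
    print(np.abs(aug - img).mean())  # perturbation stays small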
The success of deep learning is partly attributed to the availability of massive data downloaded freely from the Internet. However, it also means that users' private data may be collected by commercial organizations without consent and used to train their models. Therefore, it's important and necessary to develop a method or tool to prevent unauthorized data exploitation. In this paper, we propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners. Specifically, the noise produced by the generator for each image has the confounder property. It can build spurious correlations between images and labels, so that the model cannot learn the correct mapping from images to labels in this noise-added dataset. Meanwhile, the discriminator is used to ensure that the generated noise is small and imperceptible, thereby preserving the normal utility of the encrypted image for humans. The experiments are conducted on six image classification datasets, consisting of three natural object datasets and three medical datasets. The results demonstrate that our method not only outperforms state-of-the-art methods in standard settings, but can also be applied to fast encryption scenarios. Moreover, we show a series of transferability and stability experiments to further illustrate the effectiveness and superiority of our method.
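A toy sketch of the confounder idea: the added noise depends on the label, inducing a spurious image-label correlation, while a small budget keeps it imperceptible. The fixed per-class patterns below stand in for the trained generator:

    import numpy as np

    def make_unlearnable(images, labels, num_classes, eps=8 / 255, seed=0):
        rng = np.random.default_rng(seed)
        # One fixed noise pattern per class acts as a confounder between x and y.
        class_noise = rng.uniform(-eps, eps, (num_classes,) + images.shape[1:])
        noisy = images + class_noise[labels]
        return np.clip(noisy, 0.0, 1.0)

    imgs = np.random.rand(16, 32, 32, 3)
    labels = np.random.randint(0, 10, 16)
    protected = make_unlearnable(imgs, labels, num_classes=10)
    print(np.abs(protected - imgs).max())  # bounded by eps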
Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity. Code and data will be made available at \url{https://jingsenzhu.github.io/invrend}.
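The abstract above relies on differentiable Monte Carlo ray tracing with importance sampling; as background only (not the paper's renderer), a one-dimensional importance-sampled Monte Carlo estimator looks like this:

    import numpy as np

    def importance_sample_estimate(f, pdf, sampler, n=100_000, seed=0):
        # Estimate the integral of f by sampling x ~ pdf and averaging f(x) / pdf(x).
        rng = np.random.default_rng(seed)
        x = sampler(rng, n)
        return np.mean(f(x) / pdf(x))

    # Toy example: integrate f(x) = 3x^2 on [0, 1] (exact value 1) with pdf p(x) = 2x.
    estimate = importance_sample_estimate(
        f=lambda x: 3 * x**2,
        pdf=lambda x: 2 * x,
        sampler=lambda rng, n: np.sqrt(rng.uniform(0, 1, n)),  # inverse-CDF sampling of p
    )
    print(estimate)  # close to 1.0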
Commercial ML APIs offered by providers such as Google, Amazon, and Microsoft have greatly simplified the adoption of ML in many applications. Many companies and academics pay to use ML APIs for tasks such as object detection, OCR, and sentiment analysis. Different ML APIs tackling the same task can have very heterogeneous performance. Moreover, the models underlying the APIs also evolve over time. As ML APIs rapidly become a valuable marketplace and a widespread way of consuming machine learning, it is crucial to systematically study and compare different APIs and to characterize how APIs change over time. However, this topic is currently under-explored due to the lack of data. In this paper, we present HAPI (History of APIs), a dataset of 1,761,417 instances of commercial ML API applications (involving APIs from Amazon, Google, IBM, Microsoft, and other providers) across tasks including image tagging, text recognition, and text mining, collected from 2020 to 2022. Each instance consists of a query input for an API (e.g., an image or text) together with the API's output prediction/annotation and confidence score. HAPI is the first large-scale dataset of ML API usage and is a unique resource for studying ML-as-a-service (MLaaS). As an example of the kinds of analyses that HAPI enables, we show that the performance of ML APIs changes substantially over time: the accuracy of several APIs dropped on specific benchmark datasets. Even when an API's aggregate performance remains stable, its error modes can shift across different data subtypes between 2020 and 2022. Such changes can substantially affect entire analytics pipelines that use an ML API as a component. We further use HAPI to study disparities in commercial API performance across demographic subgroups over time. HAPI can stimulate more research in the growing field of MLaaS.
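For illustration only, a hypothetical record layout and a toy accuracy-over-time check are sketched below; the field names are assumptions, not HAPI's actual schema:

    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class APICall:
        api: str            # e.g. a provider/model identifier
        year: int           # 2020..2022
        query_id: str
        prediction: str     # the API's output annotation
        confidence: float   # the API's confidence score
        label: str          # ground-truth annotation of the benchmark example

    def accuracy_by_year(calls):
        hits, totals = defaultdict(int), defaultdict(int)
        for c in calls:
            totals[(c.api, c.year)] += 1
            hits[(c.api, c.year)] += int(c.prediction == c.label)
        return {k: hits[k] / totals[k] for k in totals}

    calls = [APICall("vision_api", 2020, "q1", "cat", 0.9, "cat"),
             APICall("vision_api", 2022, "q1", "dog", 0.8, "cat")]
    print(accuracy_by_year(calls))  # exposes a 2020 -> 2022 accuracy shift on this toy sample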
In a Markov decision process (MDP), unobservable confounders may exist and influence the data-generating process, so that classical off-policy evaluation (OPE) estimators may fail to identify the true value function of the target policy. In this paper, we study the statistical properties of OPE in confounded MDPs with observable instrumental variables. Specifically, we propose a two-stage estimator based on the instrumental variables and establish its statistical properties in confounded MDPs with a linear structure. For the non-asymptotic analysis, we prove an $\mathcal{O}(n^{-1/2})$ error bound, where $n$ is the number of samples. For the asymptotic analysis, we show that the two-stage estimator is asymptotically normal with a typical rate of $n^{1/2}$. To the best of our knowledge, we are the first to establish such statistical results for a two-stage estimator in confounded linear MDPs via instrumental variables.
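For orientation only, a generic two-stage least-squares (2SLS) template with instrument $z$, confounded regressor $x$, and outcome $y$ proceeds as follows: Stage 1 regresses $x$ on $z$, $\hat{\Pi} = \arg\min_{\Pi} \sum_{i=1}^{n} \|x_i - \Pi z_i\|_2^2$, and sets $\hat{x}_i = \hat{\Pi} z_i$; Stage 2 regresses the outcome on the fitted values, $\hat{\theta} = \arg\min_{\theta} \sum_{i=1}^{n} (y_i - \theta^\top \hat{x}_i)^2$. The paper's estimator for confounded linear MDPs shares this two-stage structure, but its exact construction is not reproduced here.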